Adverse childhood experiences (ACEs) are defined as a collection of highly stressful, and potentially traumatic, events or circumstances that occur during childhood and/or adolescence. They have been shown to be associated with increased risks of mental health conditions and other abnormal behaviours in later life. However, identifying ACEs from textual data with Natural Language Processing (NLP) is challenging because (a) there are no NLP-ready ACE ontologies; (b) there are few resources available for machine learning, necessitating data annotation from clinical experts; and (c) annotation by domain experts and the large number of documents needed to support large machine learning models are costly. In this paper, we present an ontology-driven self-supervised approach (deriving concept embeddings using an autoencoder from baseline NLP results) for producing a publicly available resource that would support large-scale machine learning (e.g., training transformer-based large language models) on social media corpora. This resource, as well as the proposed approach, is aimed at facilitating the community in training transferable NLP models to effectively surface ACEs in low-resource scenarios such as NLP on clinical notes within electronic health records. The resource, including a list of ACE ontology terms, ACE concept embeddings, and an NLP-annotated corpus, is available at https://github.com/knowlab/ace-nlp.
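The autoencoder-derived concept embeddings mentioned above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: a linear autoencoder that compresses sparse concept co-occurrence vectors into dense embeddings, with all dimensions and data entirely made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for baseline NLP output: a sparse 0/1 co-occurrence
# row per ontology concept (assumed shape, for illustration only).
n_concepts, n_features, emb_dim = 20, 50, 8
X = (rng.random((n_concepts, n_features)) < 0.1).astype(float)

W_enc = rng.normal(0, 0.1, (n_features, emb_dim))   # encoder weights
W_dec = rng.normal(0, 0.1, (emb_dim, n_features))   # decoder weights
lr = 0.01

for _ in range(300):
    Z = X @ W_enc            # encode: dense concept embeddings
    X_hat = Z @ W_dec        # decode: reconstruction of the input
    err = X_hat - X          # reconstruction error
    # gradients of the mean squared reconstruction loss
    grad_dec = Z.T @ err / n_concepts
    grad_enc = X.T @ (err @ W_dec.T) / n_concepts
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

embeddings = X @ W_enc       # one dense embedding per concept
print(embeddings.shape)
```

After training, each row of `embeddings` is the dense representation of one concept; the paper's version would be trained on real NLP-derived features rather than random toy data.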
Adverse childhood experiences (ACEs) are defined as a collection of highly stressful, and potentially traumatic, events or circumstances that occur throughout childhood and/or adolescence. They have been shown to be associated with increased risks of mental health conditions and other abnormal behaviours in later life. However, identifying ACEs from free-text electronic health records (EHRs) with Natural Language Processing (NLP) is challenging because (a) there are no NLP-ready ACE ontologies, and (b) there are limited cases available for machine learning, necessitating data annotation from clinical experts. We are currently developing a tool that uses NLP techniques to help us surface ACEs from clinical notes. This will enable further research into identifying evidence of the relationship between ACEs and the subsequent development of mental illness (e.g., addictions) in large-scale, longitudinal free-text EHRs, which has previously not been possible.
Computational text phenotyping is the practice of identifying patients with certain disorders and traits from clinical notes. Rare diseases are challenging to identify due to the few cases available for machine learning and the need for data annotation from domain experts. We propose a method using ontologies and weak supervision, with recent pre-trained contextual representations from bi-directional transformers (e.g., BERT). The ontology-based framework includes two steps: (i) Text-to-UMLS, extracting phenotypes by contextually linking mentions to concepts in the Unified Medical Language System (UMLS), with a named entity recognition and linking (NER+L) tool, SemEHR, and weak supervision with customised rules and contextual mention representations; (ii) UMLS-to-ORDO, matching UMLS concepts to rare diseases in the Orphanet Rare Disease Ontology (ORDO). The weakly supervised approach is proposed to learn a phenotype confirmation model that improves Text-to-UMLS linking without annotated data from domain experts. We evaluated the approach on three clinical datasets of discharge summaries and radiology reports from two institutions in the US and the UK. Our best weakly supervised method achieved 81.4% precision and 91.4% recall in extracting rare disease UMLS phenotypes from MIMIC-III discharge summaries. The overall pipeline processing clinical notes can surface rare disease cases, most of which are uncaptured in structured data (manually assigned ICD codes). Results on radiology reports from MIMIC-III and NHS Tayside were consistent with those on discharge summaries. We discuss the usefulness of the weakly supervised approach and propose directions for future studies.
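The two-step pipeline above can be illustrated with a toy sketch. The mention detector, the confirmation rule, and the lookup tables below are stand-ins for SemEHR and the trained confirmation model, not the authors' code; the example CUI and ORDO codes shown are real entries but the dictionaries are trivially small.

```python
# Step (i) Text-to-UMLS: link candidate mentions to UMLS concepts (CUIs).
TEXT_TO_UMLS = {"cystic fibrosis": "C0010674", "huntington disease": "C0020179"}
# Step (ii) UMLS-to-ORDO: map UMLS concepts to rare-disease ORDO entries.
UMLS_TO_ORDO = {"C0010674": "Orphanet_586", "C0020179": "Orphanet_399"}

def confirm(mention, context):
    """Toy stand-in for the weakly supervised phenotype confirmation model:
    reject a mention when it is locally negated in the preceding text."""
    idx = context.lower().find(mention)
    window = context.lower()[max(0, idx - 20):idx]
    return "no evidence of" not in window

def surface_rare_diseases(note):
    """Run both steps over one clinical note and collect confirmed matches."""
    found = []
    for mention, cui in TEXT_TO_UMLS.items():
        if mention in note.lower() and confirm(mention, note):
            ordo = UMLS_TO_ORDO.get(cui)
            if ordo:
                found.append((mention, cui, ordo))
    return found

note = "Past history significant for cystic fibrosis. No evidence of stroke."
print(surface_rare_diseases(note))
```

In the real pipeline, step (i) uses contextual mention representations rather than substring matching, and the confirmation model is learned from weak labels rather than hand-written.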
Clinical coding is the task of transforming medical information in a patient's health records into structured codes so that they can be used for statistical analysis. This is a cognitive and time-consuming task that follows a standard process in order to achieve a high level of consistency. Automated systems can support clinical coding to improve the efficiency and accuracy of the process. We introduce the idea of automated clinical coding and summarise its challenges from the perspectives of artificial intelligence (AI) and natural language processing (NLP), based on the literature, our project experience over the past two and a half years (late 2019 to early 2022), and discussions with clinical coding experts in Scotland and the UK. Our research reveals gaps between current deep learning-based approaches to clinical coding and the explainability and consistency required in real-world practice. Knowledge-based methods that represent and reason over the standard, explainable process of the task may need to be incorporated into deep learning-based methods for clinical coding. Despite the technical and organisational challenges, automated clinical coding is a promising task for AI. Coders need to be involved in the development process. There is much to achieve in developing and deploying AI-based automated systems in the next five years and beyond.
Automated medical coding, an essential task for healthcare operation and delivery, makes unstructured data manageable by predicting medical codes from clinical documents. Recent advances in deep learning models for natural language processing have been widely applied to this task. However, the field lacks a unified view of the design of neural network architectures for medical coding. This review proposes a unified framework to provide a general understanding of the building blocks of medical coding models and summarises recent advanced models under the proposed framework. Our unified framework decomposes medical coding into four main components: the encoder module for text feature extraction, the mechanism for building deep encoder architectures, the decoder module for transforming hidden representations into medical codes, and the usage of auxiliary information. Finally, we discuss key research challenges and future directions.
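The four-component decomposition can be made concrete with a skeleton. This is a pure-Python illustrative sketch, assuming toy stand-ins for each neural module; none of the functions, features, or thresholds below come from the review itself, and the real components would be neural networks.

```python
def encoder(tokens):
    """Component 1, text feature extraction: one toy feature vector per token
    (here, token length and capitalisation stand in for learned features)."""
    return [[float(len(t)), float(t[0].isupper())] for t in tokens]

def deepen(features):
    """Component 2, deep-architecture mechanism: stack another 'layer'
    (here, a residual-style rescaling stands in for extra network depth)."""
    return [[x + 0.1 * x for x in f] for f in features]

def decoder(features, code_set):
    """Component 3: map a pooled hidden representation to medical codes
    (here, a single threshold stands in for a multi-label classifier)."""
    pooled = sum(f[0] for f in features) / len(features)
    return [c for c in code_set if pooled > 4.0]

def predict_codes(text, code_set, auxiliary_hint=None):
    feats = deepen(encoder(text.split()))
    codes = decoder(feats, code_set)
    if auxiliary_hint:  # Component 4: auxiliary information, e.g. code hierarchy
        codes = [c for c in codes if c.startswith(auxiliary_hint)]
    return codes

print(predict_codes("Patient admitted with severe pneumonia",
                    ["J18.9", "I10"], auxiliary_hint="J"))
```

The value of the framework is that each of these four slots can be swapped independently, which is how the review organises the recent model variants.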
A recent study has shown a phenomenon called neural collapse, in which the within-class means of features and the classifier weight vectors converge to the vertices of a simplex equiangular tight frame at the terminal phase of training for classification. In this paper, we explore the corresponding structures of the last-layer feature centers and classifiers in semantic segmentation. Based on our empirical and theoretical analysis, we point out that semantic segmentation naturally brings contextual correlation and imbalanced distribution among classes, which breaks the equiangular and maximally separated structure of neural collapse for both feature centers and classifiers. However, such a symmetric structure is beneficial to discrimination for the minor classes. To preserve these advantages, we introduce a regularizer on feature centers to encourage the network to learn features closer to the appealing structure in imbalanced semantic segmentation. Experimental results show that our method can bring significant improvements on both 2D and 3D semantic segmentation benchmarks. Moreover, our method ranks 1st and sets a new record (+6.8% mIoU) on the ScanNet200 test leaderboard. Code will be available at https://github.com/dvlab-research/Imbalanced-Learning.
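The simplex equiangular tight frame (ETF) mentioned above can be constructed explicitly. The sketch below uses the standard construction (not the paper's code): K unit vectors in R^K whose pairwise cosine similarity is exactly -1/(K-1), i.e. maximally and equally separated.

```python
import numpy as np

K = 5
# Standard simplex-ETF construction: scale the centered identity matrix.
# Each column sqrt(K/(K-1)) * (e_j - (1/K)*1) is already unit-norm.
M = np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)
M = M / np.linalg.norm(M, axis=0)        # normalise columns (a no-op here)

cos = M.T @ M                             # pairwise cosine similarities
off_diag = cos[~np.eye(K, dtype=bool)]    # all i != j entries
print(np.allclose(off_diag, -1.0 / (K - 1)))
```

This -1/(K-1) angle is the "equiangular and maximally separated structure" that, per the abstract, contextual correlation and class imbalance break in segmentation, and that the proposed regularizer on feature centers tries to restore.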
Weakly-supervised object localization aims to indicate the category as well as the scope of an object in an image given only the image-level labels. Most of the existing works are based on Class Activation Mapping (CAM) and endeavor to enlarge the discriminative area inside the activation map to perceive the whole object, yet ignore the co-occurrence confounder of the object and context (e.g., fish and water), which makes the model inspection hard to distinguish object boundaries. Besides, the use of CAM also brings a dilemma problem that the classification and localization always suffer from a performance gap and can not reach their highest accuracy simultaneously. In this paper, we propose a causal knowledge distillation method, dubbed KD-CI-CAM, to address these two under-explored issues in one go. More specifically, we tackle the co-occurrence context confounder problem via causal intervention (CI), which explores the causalities among image features, contexts, and categories to eliminate the biased object-context entanglement in the class activation maps. Based on the de-biased object feature, we additionally propose a multi-teacher causal distillation framework to balance the absorption of classification knowledge and localization knowledge during model training. Extensive experiments on several benchmarks demonstrate the effectiveness of KD-CI-CAM in learning clear object boundaries from confounding contexts and addressing the dilemma problem between classification and localization performance.
Witnessing the impressive achievements of pre-training techniques on large-scale data in the field of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit, and mitigate the sample inefficiency problem for visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive irrelevant information for decision making, making predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for the policy pretraining in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving policy related representations and thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to even over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
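The photometric error used as the self-supervision signal above can be sketched at its simplest as a per-pixel difference between a reconstructed view and the actual frame. This is an illustrative numpy version on made-up arrays; the paper's full objective involves warping with the predicted pose and depth, which is omitted here.

```python
import numpy as np

def photometric_error(reconstructed, target):
    """Mean absolute per-pixel intensity difference between two images."""
    return float(np.mean(np.abs(reconstructed - target)))

target = np.zeros((4, 4))          # toy "current observation"
good = np.full((4, 4), 0.1)        # reconstruction close to the target
bad = np.full((4, 4), 0.9)         # reconstruction far from the target

print(photometric_error(good, target) < photometric_error(bad, target))
```

Minimising this error over many frames is what drives the visual encoder to produce ego-motion predictions consistent with the observed video.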
In this work, we focus on instance-level open vocabulary segmentation, intending to expand a segmenter for instance-wise novel categories without mask annotations. We investigate a simple yet effective framework with the help of image captions, focusing on exploiting thousands of object nouns in captions to discover instances of novel classes. Rather than adopting pretrained caption models or using massive caption datasets with complex pipelines, we propose an end-to-end solution from two aspects: caption grounding and caption generation. In particular, we devise a joint Caption Grounding and Generation (CGG) framework based on a Mask Transformer baseline. The framework has a novel grounding loss that performs explicit and implicit multi-modal feature alignments. We further design a lightweight caption generation head to allow for additional caption supervision. We find that grounding and generation complement each other, significantly enhancing the segmentation performance for novel categories. We conduct extensive experiments on the COCO dataset with two settings: Open Vocabulary Instance Segmentation (OVIS) and Open Set Panoptic Segmentation (OSPS). The results demonstrate the superiority of our CGG framework over previous OVIS methods, achieving a large improvement of 6.8% mAP on novel classes without extra caption data. Our method also achieves over 15% PQ improvements for novel classes on the OSPS benchmark under various settings.
Nearest-Neighbor (NN) classification has been proven as a simple and effective approach for few-shot learning. The query data can be classified efficiently by finding the nearest support class based on features extracted by pretrained deep models. However, NN-based methods are sensitive to the data distribution and may produce false predictions if the samples in the support set happen to lie around the distribution boundary of different classes. To solve this issue, we present P3DC-Shot, an improved nearest-neighbor based few-shot classification method empowered by prior-driven data calibration. Inspired by the distribution calibration technique which utilizes the distribution or statistics of the base classes to calibrate the data for few-shot tasks, we propose a novel discrete data calibration operation which is more suitable for NN-based few-shot classification. Specifically, we treat the prototypes representing each base class as priors and calibrate each support data based on its similarity to different base prototypes. Then, we perform NN classification using these discretely calibrated support data. Results from extensive experiments on various datasets show that our efficient non-learning based method can outperform or be at least comparable to SOTA methods which need additional learning steps.
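The prototype-based calibration idea can be sketched as follows. This is an illustrative numpy version, not the P3DC-Shot implementation: each support feature is mixed with a similarity-weighted combination of base-class prototypes, and queries are then classified by nearest calibrated support; the mixing weight `alpha` and all data are made up.

```python
import numpy as np

def calibrate(support, base_prototypes, alpha=0.5):
    """Shift each support vector toward base prototypes (the priors),
    weighted by its softmax similarity to each prototype."""
    sims = support @ base_prototypes.T
    weights = np.exp(sims) / np.exp(sims).sum(axis=1, keepdims=True)
    return (1 - alpha) * support + alpha * (weights @ base_prototypes)

def nearest_neighbor(query, support, labels):
    """Plain NN classification against the (calibrated) support set."""
    dists = np.linalg.norm(support - query, axis=1)
    return labels[int(np.argmin(dists))]

rng = np.random.default_rng(1)
base = rng.normal(size=(3, 4))       # base-class prototypes used as priors
support = rng.normal(size=(2, 4))    # one shot per novel class
labels = ["cat", "dog"]

calibrated = calibrate(support, base)
print(nearest_neighbor(support[0], calibrated, labels))
```

The non-learning nature claimed in the abstract is visible here: calibration is a closed-form reweighting against fixed prototypes, with no gradient steps at few-shot time.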